Reachability in continuous-time Markov reward decision processes

Authors

  • Christel Baier
  • Boudewijn R. Haverkort
  • Holger Hermanns
  • Joost-Pieter Katoen
Abstract

Continuous-time Markov decision processes (CTMDPs) are widely used for the control of queueing systems, epidemic and manufacturing processes. Various results on optimal schedulers for discounted and average reward optimality criteria in CTMDPs are known, but the typical game-theoretic winning objectives have received scant attention so far. This paper studies various sorts of reachability objectives for CTMDPs. Memoryless schedulers are optimal for simple reachability objectives, as it suffices to consider the embedded MDP. When restricting to time-abstract schedulers, schedulers that may count the number of visits to states are optimal for timed reachability in uniform CTMDPs. The central result is that for any CTMDP, reward reachability objectives are dual to timed ones. As a corollary, ε-optimal schedulers for reward reachability objectives in uniform CTMDPs can be obtained in polynomial time using a simple backward greedy algorithm.
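The backward greedy idea behind the abstract can be illustrated with a small sketch (not the paper's exact algorithm): in a uniform CTMDP with exit rate E in every state, the number of jumps within time t is Poisson(E·t)-distributed, so the maximal timed reachability probability can be approximated by finite-horizon backward induction over a jump counter, weighting a first hit of the goal at jump n by Pr(at least n jumps before t). The data layout (`P[s][a]` as successor distributions of the embedded DTMDP) is an assumption for illustration.

```python
import math

def max_timed_reachability(P, goal, E, t, eps=1e-6):
    """Maximum probability to reach `goal` within time bound t in a
    uniform CTMDP whose every state has exit rate E (illustrative sketch).

    P[s][a] is a dict {successor: probability} of the embedded DTMDP;
    `goal` is a set of states.  Returns {state: probability}, eps-close
    from below due to truncation of the Poisson sum.
    """
    lam = E * t
    # Truncation depth K: Poisson(lam) mass beyond K is below eps.
    # (Fine for moderate lam; a robust version would use Fox-Glynn weights.)
    term, total, K = math.exp(-lam), math.exp(-lam), 0
    while total < 1.0 - eps:
        K += 1
        term *= lam / K
        total += term
    # w[n] = Pr(at least n jumps occur in [0, t]): the "payoff" earned by
    # hitting the goal for the first time at jump number n.
    w = [0.0] * (K + 2)
    acc, psi = 1.0, math.exp(-lam)
    for n in range(K + 1):
        w[n] = acc
        acc -= psi
        psi *= lam / (n + 1)
    # Backward induction over the jump counter i = K..0:
    # f_i(s) = w[i] if s is a goal state, else max_a E[f_{i+1}(successor)].
    v = {s: 0.0 for s in P}  # f_{K+1} = 0 (truncation error <= eps)
    for i in range(K, -1, -1):
        v = {s: w[i] if s in goal else
                max(sum(pr * v[sp] for sp, pr in P[s][a].items())
                    for a in P[s])
             for s in P}
    return v  # f_0
```

For a two-state example where state 0 jumps to the goal state 1 with probability 1, the result approaches 1 − e^(−E·t), the probability that the first jump occurs before the deadline.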


Similar resources

Maximal Cost-Bounded Reachability Probability on Continuous-Time Markov Decision Processes

In this paper, we consider multi-dimensional maximal cost-bounded reachability probability over continuous-time Markov decision processes (CTMDPs). Our major contributions are as follows. Firstly, we derive an integral characterization which states that the maximal cost-bounded reachability probability function is the least fixed-point of a system of integral equations. Secondly, we prove that ...


Efficient Computation of Time-Bounded Reachability Probabilities in Uniform Continuous-Time Markov Decision Processes

A continuous-time Markov decision process (CTMDP) is a generalization of a continuous-time Markov chain in which both probabilistic and nondeterministic choices co-exist. This paper presents an efficient algorithm to compute the maximum (or minimum) probability to reach a set of goal states within a given time bound in a uniform CTMDP, i.e., a CTMDP in which the delay time distribution per stat...


Computing Quantiles in Markov Reward Models

Probabilistic model checking mainly concentrates on techniques for reasoning about the probabilities of certain path properties or expected values of certain random variables. For the quantitative system analysis, however, there is also another type of interesting performance measure, namely quantiles. A typical quantile query takes as input a lower probability bound p ∈ ]0, 1] and a reachabili...
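A quantile query of the kind described above can be sketched as follows, under the simplifying assumption of a finite DTMC with strictly positive integer rewards (so a plain dynamic program over the reward bound suffices, without solving linear systems for zero-reward cycles). The function name and data layout are illustrative, not from the paper.

```python
def reward_quantile(P, cost, goal, s0, p, r_max=10_000):
    """Smallest reward bound r with Pr(reach `goal` from s0 while
    accumulating reward at most r) >= p (illustrative sketch).

    P[s] is {successor: probability} of a finite DTMC; cost[s] >= 1 is
    the integer reward earned on leaving s.  Returns None if no bound up
    to r_max suffices (e.g. p exceeds the unbounded reachability
    probability).
    """
    layers = []  # layers[r][s] = Pr(reach goal from s with cost <= r)
    for r in range(r_max + 1):
        x = {}
        for s in P:
            if s in goal:
                x[s] = 1.0                    # goal reached at cost 0
            elif cost[s] <= r:
                prev = layers[r - cost[s]]    # already computed: cost >= 1
                x[s] = sum(pr * prev[sp] for sp, pr in P[s].items())
            else:
                x[s] = 0.0                    # cannot even afford one step
        layers.append(x)
        if x[s0] >= p:
            return r
    return None
```

For instance, a state that reaches the goal with probability 0.5 per unit-cost step and otherwise loops has Pr(cost ≤ r) = 1 − 0.5^r, so the 0.9-quantile is the smallest r with 0.5^r ≤ 0.1, namely r = 4.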


Time-Bounded Reachability in Continuous-Time Markov Decision Processes

This paper solves the problem of computing the maximum and minimum probability to reach a set of goal states within a given time bound for locally uniform continuous-time Markov decision processes (CTMDPs). As this model allows for nondeterministic choices between exponentially delayed transitions, we define total time positional (TTP) schedulers which rely on the CTMDP’s current state and the ...


Analysis and scheduler synthesis of time-bounded reachability in continuous-time Markov decision processes

Continuous-time Markov decision processes (CTMDPs) are stochastic models in which probabilistic and nondeterministic choices co-exist. Lately, a discretization technique has been developed to compute time-bounded reachability probabilities in locally uniform CTMDPs, i.e. CTMDPs with state-wise constant sojourn-times. We extend the underlying value iteration algorithm, such that it computes an ε-...




Publication date: 2008